71 research outputs found

    Improving program comprehension tools for domain specific languages

    Get PDF
    Master's dissertation in Informatics (Dissertação de Mestrado em Informática). Since the dawn of time, curiosity and the need to improve their quality of life have led humans to find means to understand everything surrounding them, aiming to improve it. As the creating abilities of some grew, the comprehension capacity of others followed in their steps. Disassembling physical objects to comprehend the connections between the pieces, and thus understand how they work together, is common human behaviour. With the arrival of computers, humans felt the need to apply the same technique (disassemble to comprehend) to their programs. Traditionally, these programs are written in general-purpose programming languages, so techniques and artifacts for program comprehension were built to ease the work of software engineers who maintain and improve programs developed by others. Generally, these generic languages deal with concepts at a level that the human brain can hardly grasp; understanding programs written in them is therefore a hard task, because the distance between the concepts at the program level and the concepts at the problem level is too big. Thus, just as in politics, justice, medicine and other fields, where specialised vocabularies ease comprehension between people, programming languages that address a specific domain were created. These languages raise the abstraction level of the program domain, shortening the gap to the concepts of the problem domain. Tools and techniques for program comprehension, however, commonly address the program domain and take little advantage of the problem domain. This master's thesis assumes the hypothesis that it is easier to comprehend a program when the underlying problem and program domains are known and a bridge between them is established. A program comprehension technique for domain-specific languages is then conceived, proposed and discussed. The main objective is to take advantage of the rich knowledge about the problem domain inherent to a domain-specific language, and to improve traditional program comprehension tools that, until now, dealt only with the program domain, creating connections between the two domains. The final result shows, visually, what happens internally at the program-domain level, synchronised with what happens externally at the problem level.
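    A minimal sketch of the core idea in Python: interpret a DSL program while emitting two synchronised views, one at the program-domain level (which statement ran, resulting state) and one at the problem-domain level (the concept each statement denotes). The toy "inventory" DSL and its concept table are hypothetical, chosen only for illustration; the thesis targets visual, synchronised views rather than console output.

        # A minimal sketch (not the thesis implementation) of interpreting a toy DSL
        # while emitting two synchronised views: program domain vs. problem domain.
        # The "inventory" DSL and its concept table below are hypothetical.

        PROBLEM_CONCEPTS = {          # problem-domain meaning of each DSL operation
            "receive": "goods arrive at the warehouse",
            "ship":    "goods leave the warehouse towards a customer",
        }

        def interpret(program):
            """Run a list of (operation, item, quantity) statements."""
            stock = {}                # program-domain state
            for step, (op, item, qty) in enumerate(program, start=1):
                if op == "receive":
                    stock[item] = stock.get(item, 0) + qty
                elif op == "ship":
                    stock[item] = stock.get(item, 0) - qty
                else:
                    raise ValueError(f"unknown operation: {op}")
                # Program-domain view: which statement ran and the resulting state.
                print(f"[program] step {step}: {op}({item}, {qty}) -> stock = {stock}")
                # Problem-domain view, emitted in lockstep with the program view.
                print(f"[problem] step {step}: {PROBLEM_CONCEPTS[op]} ({qty} x {item})")

        if __name__ == "__main__":
            interpret([("receive", "screws", 100), ("ship", "screws", 30)])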

    An enhanced model for stochastic coordination

    Get PDF
    Published in "Proceedings of the First International Workshop on Formal Methods for and on the Cloud, iFMCloud@IFM 2016, Reykjavik, Iceland, 4th June 2016". Applications developed over the cloud coordinate several, often anonymous, computational resources, distributed over different execution nodes, within flexible architectures. Coordination models able to represent quantitative data provide a powerful basis for their analysis and validation. This paper extends IMCreo, a semantic model for Stochastic Reo based on interactive Markov chains, to enhance its scalability, by regarding each channel and node, as well as the interface components, as independent stochastic processes that may (or may not) synchronise with the rest of the coordination circuit. Luis S. Barbosa is supported by grant SFRH/BSAB/113890/2015 from FCT, the Portuguese Foundation for Science and Technology. This research is financed by the ERDF COMPETE 2020 Programme within project POCI-01-0145-FEDER-00696, and by National Funds through FCT as part of project UID/EEA/50014/2013.
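    A rough illustration in Python of the compositional idea above (assumptions of mine, not the IMCreo implementation): each channel, node or interface component is a small process with interactive (action-labelled) and Markovian (rate-labelled) transitions, and parallel composition makes shared actions fire jointly while rates interleave.

        # A minimal sketch of IMC-style composition of Reo building blocks:
        # interactive transitions on shared actions synchronise, rates interleave.
        # The lossy channel, its rate and the reader process are hypothetical.

        from itertools import product

        # A process is (states, interactive transitions, Markovian transitions):
        # interactive: (source, action, target); Markovian: (source, rate, target).

        def compose(p, q, sync):
            """Parallel composition of two IMC-like processes, synchronising on `sync`."""
            (sp, ip, mp), (sq, iq, mq) = p, q
            states = list(product(sp, sq))
            inter, markov = [], []
            for (s1, a, t1), (s2, b, t2) in product(ip, iq):
                if a == b and a in sync:                    # joint step on a shared action
                    inter.append(((s1, s2), a, (t1, t2)))
            for s1, a, t1 in ip:
                if a not in sync:                           # independent interactive steps
                    inter += [((s1, s2), a, (t1, s2)) for s2 in sq]
            for s2, b, t2 in iq:
                if b not in sync:
                    inter += [((s1, s2), b, (s1, t2)) for s1 in sp]
            for s1, r, t1 in mp:                            # Markovian steps always interleave
                markov += [((s1, s2), r, (t1, s2)) for s2 in sq]
            for s2, r, t2 in mq:
                markov += [((s1, s2), r, (s1, t2)) for s1 in sp]
            return states, inter, markov

        # Hypothetical lossy channel (port "a" to port "b") composed with a reader on "b";
        # the Markovian transition models the loss of a pending message at rate 2.0.
        channel = (["idle", "full"],
                   [("idle", "a", "full"), ("full", "b", "idle")],
                   [("full", 2.0, "idle")])
        reader = (["wait"], [("wait", "b", "wait")], [])
        print(compose(channel, reader, sync={"b"})[1])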

    A realistic scooter rebalancing system via metaheuristics

    Get PDF
    This paper addresses a realistic electric scooter rebalancing task that includes business rules (e.g., time constraints, available truck fleet). We explore an integer encoding approach and three metaheuristics (hill climbing, simulated annealing and a genetic algorithm), discussing the obtained results, current limitations and future work directions. FCT – Fundação para a Ciência e Tecnologia, within the R&D Units Project Scope UIDB/00319/2020.
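    A minimal sketch of this kind of setup, under assumptions of mine (task coordinates, fleet size, capacity and penalty weight are invented): an integer encoding in which gene i holds the truck assigned to pickup task i, a cost mixing travel distance with a capacity penalty standing in for the business rules, and hill climbing as the simplest of the three metaheuristics.

        # A minimal sketch of integer-encoded scooter rebalancing with hill climbing.
        # Coordinates, fleet size, capacity and penalty weight are assumed values.

        import random

        random.seed(0)
        N_TASKS, N_TRUCKS, CAPACITY = 12, 3, 5
        TASKS = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_TASKS)]
        DEPOT = (5.0, 5.0)

        def cost(assignment):
            """Travel distance of each truck's route plus a penalty for overloading."""
            total = 0.0
            for truck in range(N_TRUCKS):
                route = [TASKS[i] for i, t in enumerate(assignment) if t == truck]
                if len(route) > CAPACITY:                   # business-rule penalty
                    total += 100.0 * (len(route) - CAPACITY)
                pos = DEPOT
                for x, y in route:                          # visit tasks in encoding order
                    total += ((pos[0] - x) ** 2 + (pos[1] - y) ** 2) ** 0.5
                    pos = (x, y)
            return total

        def hill_climb(iterations=5000):
            """Integer encoding: position i holds the truck assigned to task i."""
            sol = [random.randrange(N_TRUCKS) for _ in range(N_TASKS)]
            best = cost(sol)
            for _ in range(iterations):
                cand = sol[:]
                cand[random.randrange(N_TASKS)] = random.randrange(N_TRUCKS)  # mutate one gene
                c = cost(cand)
                if c <= best:
                    sol, best = cand, c
            return sol, best

        print(hill_climb())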

    The impact of microblogging data for stock market prediction: Using Twitter to predict returns, volatility, trading volume and survey sentiment indices

    Get PDF
    In this paper, we propose a robust methodology to assess the value of microblogging data for forecasting stock market variables: returns, volatility and trading volume of diverse indices and portfolios. The methodology uses sentiment and attention indicators extracted from microblogs (a large Twitter dataset is adopted) and survey indices (AAII, II, USMC and Sentix), several ways to aggregate these indicators daily, a Kalman filter to merge the microblog and survey sources, a realistic rolling-window evaluation, several machine learning methods, and the Diebold-Mariano test to check whether the sentiment- and attention-based predictions add value over an autoregressive baseline. We found that Twitter sentiment and posting volume were relevant for forecasting the returns of the S&P 500 index, portfolios of lower market capitalization and some industries. Additionally, the Kalman-filtered (KF) sentiment was informative for forecasting returns, and the Twitter and KF sentiment indicators were useful for predicting some survey sentiment indicators. These results confirm the usefulness of microblogging data for financial expert systems, allowing the prediction of stock market behaviour and providing a valuable alternative to existing survey measures, with advantages such as fast and cheap creation and daily frequency. This work was supported by FCT - Fundação para a Ciência e Tecnologia within the Project Scope UID/CEC/00319/2013. We wish to thank the anonymous reviewers for their helpful comments.
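    A simplified sketch of the Kalman-filter merging step (a local-level formulation of my own, with assumed noise variances, not the paper's exact specification): the daily Twitter indicator and the sparser survey indicator are treated as noisy observations of a single latent sentiment state.

        # A minimal sketch of merging two sentiment sources with a 1-D Kalman filter.
        # The process/observation variances and the synthetic data are assumptions.

        import math, random

        random.seed(1)
        Q = 0.05          # process noise of the latent sentiment (assumed)
        R_TWITTER = 0.40  # observation noise of the Twitter indicator (assumed)
        R_SURVEY = 0.20   # observation noise of the survey indicator (assumed)

        def kalman_merge(twitter, survey):
            """Return the filtered latent sentiment series (random-walk state model)."""
            x, p = 0.0, 1.0                      # state estimate and its variance
            merged = []
            for t_obs, s_obs in zip(twitter, survey):
                p += Q                           # predict step
                for obs, r in ((t_obs, R_TWITTER), (s_obs, R_SURVEY)):
                    if obs is None:              # survey value missing on most days
                        continue
                    k = p / (p + r)              # Kalman gain
                    x += k * (obs - x)           # update with this source
                    p *= (1 - k)
                merged.append(x)
            return merged

        # Synthetic example: daily Twitter sentiment, survey published every 7th day.
        days = 30
        latent = [math.sin(d / 5.0) for d in range(days)]
        twitter = [v + random.gauss(0, math.sqrt(R_TWITTER)) for v in latent]
        survey = [v + random.gauss(0, math.sqrt(R_SURVEY)) if d % 7 == 0 else None
                  for d, v in enumerate(latent)]
        print([round(v, 2) for v in kalman_merge(twitter, survey)])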

    Choosing grammars to support language processing courses

    Get PDF
    Teaching Language Processing courses is a hard task. The level of abstraction inherent to some of the basic concepts in the area, and the technical skills required to implement efficient processors, are responsible for the number of students that do not learn the subject and do not succeed in finishing the course. In this paper we list the main concepts involved in the Language Processing subject and identify the skills required to learn them. In this context, it becomes feasible to identify the difficulties that lead students to fail, which enables us to suggest some pragmatic ways to overcome those troubles. We focus on the grammars best suited to motivate students and help them learn the basic concepts easily. After identifying the characteristics of such grammars, some examples are presented to make our proposal concrete and clear. The contribution of this paper is the systematic way we approach the process of teaching Language Processing courses towards a successful learning activity.
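    To make the notion of a motivating grammar concrete, a small illustrative example follows; the shopping-list grammar and its hand-written recursive-descent recognizer are my own choice, not one of the paper's examples.

        # A minimal sketch of a tiny, domain-close grammar and a recursive-descent
        # parser for it (illustrative only; not taken from the paper):
        #
        #   List -> Item { "," Item }
        #   Item -> INT NAME

        import re

        TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z]+)|(,))")

        def parse_list(text):
            tokens, pos = TOKEN.findall(text), 0
            items = []

            def item():
                nonlocal pos
                qty, _, _ = tokens[pos]; pos += 1        # Item -> INT ...
                if not qty:
                    raise SyntaxError("expected a quantity")
                _, name, _ = tokens[pos]; pos += 1       # ... NAME
                if not name:
                    raise SyntaxError("expected a product name")
                items.append((int(qty), name))

            item()                                       # List -> Item ...
            while pos < len(tokens) and tokens[pos][2] == ",":
                pos += 1                                 # ... { "," Item }
                item()
            return items

        print(parse_list("2 apples, 1 bread, 12 eggs"))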

    Forward-central two-particle correlations in p-Pb collisions at √sNN = 5.02 TeV

    Get PDF
    Two-particle angular correlations between trigger particles in the forward pseudorapidity range (2.5 < η < 4.0) … 2 GeV/c. (C) 2015 CERN for the benefit of the ALICE Collaboration. Published by Elsevier B.V. Peer reviewed.

    Event-shape engineering for inclusive spectra and elliptic flow in Pb-Pb collisions at √sNN = 2.76 TeV

    Get PDF
    Peer reviewed

    Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at √sNN = 2.76 TeV

    Get PDF
    The elliptic flow, v2, of muons from heavy-flavour hadron decays at forward rapidity (2.5 < y < 4) is measured in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The scalar product, two- and four-particle Q-cumulants and Lee-Yang zeros methods are used. The dependence of the v2 of muons from heavy-flavour hadron decays on the collision centrality, in the range 0-40%, and on transverse momentum, pT, is studied in the interval 3 < pT < 10 GeV/c. A positive v2 is observed with the scalar product and two-particle Q-cumulant methods in semi-central collisions (10-20% and 20-40% centrality classes) for the pT interval from 3 to about 5 GeV/c, with a significance larger than 3 sigma based on the combination of statistical and systematic uncertainties. The v2 magnitude tends to decrease towards more central collisions and with increasing pT, and becomes compatible with zero in the interval 6 < pT < 10 GeV/c. The results are compared to models describing the interaction of heavy quarks and open heavy-flavour hadrons with the high-density medium formed in high-energy heavy-ion collisions. (C) 2015 CERN for the benefit of the ALICE Collaboration. Published by Elsevier B.V. Peer reviewed.
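    For readers unfamiliar with the method, the sketch below applies the textbook two-particle Q-cumulant estimator (Q2 = sum_j exp(i 2 phi_j), <2> = (|Q2|^2 - M)/(M(M-1)), v2{2} = sqrt(<<2>>)) to toy events; it illustrates the estimator only and is not the ALICE analysis code.

        # A minimal sketch of the two-particle Q-cumulant estimate of v2 on toy events.
        # The event generator (pure flow, no detector effects) is an assumption.

        import cmath, math, random

        random.seed(2)

        def sample_event(m=200, v2=0.08):
            """Toy event: angles drawn from dN/dphi ~ 1 + 2*v2*cos(2*phi) (accept-reject)."""
            phis = []
            while len(phis) < m:
                phi = random.uniform(0, 2 * math.pi)
                if random.uniform(0, 1 + 2 * v2) < 1 + 2 * v2 * math.cos(2 * phi):
                    phis.append(phi)
            return phis

        def v2_two_particle(events):
            """v2{2} = sqrt(<<2>>), with <2> = (|Q2|^2 - M) / (M*(M-1)) per event."""
            num, den = 0.0, 0.0
            for phis in events:
                m = len(phis)
                q2 = sum(cmath.exp(2j * phi) for phi in phis)
                num += abs(q2) ** 2 - m          # sum over distinct pairs of cos(2*dphi)
                den += m * (m - 1)
            c2 = num / den                       # event-averaged two-particle cumulant
            return math.sqrt(c2) if c2 > 0 else float("nan")

        events = [sample_event() for _ in range(500)]
        print("v2{2} estimate:", round(v2_two_particle(events), 3))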

    Measurement of charged jet production cross sections and nuclear modification in p-Pb collisions at √sNN = 5.02 TeV

    Get PDF
    Charged jet production cross sections in p-Pb collisions at √sNN = 5.02 TeV measured with the ALICE detector at the LHC are presented. Using the anti-kT algorithm, jets have been reconstructed in the central rapidity region from charged particles with resolution parameters R = 0.2 and R = 0.4. The reconstructed jets have been corrected for detector effects and the underlying event background. To calculate the nuclear modification factor, R_pPb, of charged jets in p-Pb collisions, a pp reference was constructed by scaling previously measured charged jet spectra at √s = 7 TeV. In the transverse momentum range 20 … Peer reviewed.

    Reconfiguração arquitetural da interação de serviços (Architectural reconfiguration of service interaction)

    No full text
    Doctoral thesis, MAP-i Doctoral Programme in Computer Science. The exponential growth in the number of information technology users and the rise of their expectations imposed a paradigmatic change in the way software systems are developed: from monolithic to modular, from centralised to distributed, from static to dynamic. Software systems are nowadays regarded as coordinated compositions of several computational blocks, distributed over different execution nodes, within flexible and dynamic architectures. They are not flawless, though. Execution nodes may fail, new requirements may become necessary, or the deployment environment may evolve in such a way that the quality of service of the system degrades. Reconfiguring, repairing and adapting, preferably in a dynamic way, thus became relevant issues for the software architect. But developing such systems correctly is still a challenge. In particular, current (formal) methods for characterising and analysing contextual changes and reconfiguration strategies fall behind engineering needs. This thesis formalises a framework, referred to as aris, for modelling and analysing architectural reconfigurations. The focus is set on the coordination layer, understood in the context of the Reo model, as it plays the key role in defining the behaviour of compositional systems. The thesis therefore proposes the notion of a Coordination Pattern, a graph-based model of the coordination layer, and of Reconfiguration Patterns, parametric operations inducing topological changes in coordination patterns. Properties of reconfigurations can be stated and evaluated from two different perspectives: behavioural and structural. The former compares the behavioural semantics of the reconfigured system, based on whatever semantic model one associates to coordination patterns; the latter focuses on the graph topology of the coordination pattern, with properties expressed in a propositional hybrid logic referring to the actual connectivity of that graph. To bring quality of service into the picture, the thesis also contributes a new semantic model for stochastic Reo, based on interactive Markov chains. This opens new possibilities for analysing both coordination patterns and reconfigurations, in particular for inspecting the effects of reconfigurations on the system's quality of service, or for determining reconfiguration triggers based on variations of the latter. Another contribution of the thesis is the integration of aris into a monitoring strategy that enables self-adaptation and attempts to deliver it as a service in a cloud environment. Tools are delivered to support aris; in particular, language-based technology to encode, transform and analyse coordination and reconfiguration patterns materialises the framework in a dedicated editor. All the above contributions are assessed through a case study in which a static system is reworked to support self-adaptation. Funded by Fundação para a Ciência e Tecnologia (FCT) through PhD grant SFRH/BD/71475/2010.
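    A toy rendering of the two central notions (a simplification of mine, not the aris tooling): a coordination pattern as a graph whose edges are Reo channels, a reconfiguration pattern as a parametric rewrite of that graph, and a simple connectivity check standing in for the structural (hybrid-logic) properties.

        # A minimal sketch: coordination patterns as channel graphs, reconfiguration
        # patterns as parametric graph rewrites, plus a structural (connectivity) check.
        # The channel types and port names below are illustrative assumptions.

        from collections import deque

        # A coordination pattern: channels as (source node, channel type, target node).
        pattern = [("a", "sync", "m"), ("m", "fifo1", "b")]

        def replace_channel(pat, src, tgt, new_type):
            """Reconfiguration pattern: swap the type of the channel from src to tgt."""
            return [(s, new_type if (s, t) == (src, tgt) else c, t) for s, c, t in pat]

        def connected(pat, start, goal):
            """Structural property: is there a path of channels from start to goal?"""
            adj = {}
            for s, _, t in pat:
                adj.setdefault(s, []).append(t)
            seen, todo = {start}, deque([start])
            while todo:
                node = todo.popleft()
                if node == goal:
                    return True
                for nxt in adj.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
            return False

        reconfigured = replace_channel(pattern, "m", "b", "lossysync")
        print(reconfigured, connected(reconfigured, "a", "b"))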